
    Equations involving fractional Laplacian operator: Compactness and application

    In this paper, we consider the following problem involving the fractional Laplacian operator: \begin{equation}\label{eq:0.1} (-\Delta)^{\alpha} u= |u|^{2^*_\alpha-2-\varepsilon}u + \lambda u\,\, {\rm in}\,\, \Omega,\quad u=0 \,\, {\rm on}\, \, \partial\Omega, \end{equation} where $\Omega$ is a smooth bounded domain in $\mathbb{R}^N$, $\varepsilon\in [0, 2^*_\alpha-2)$, $0<\alpha<1$, and $2^*_\alpha = \frac{2N}{N-2\alpha}$. We show that for any sequence of solutions $u_n$ of \eqref{eq:0.1} corresponding to $\varepsilon_n\in [0, 2^*_\alpha-2)$ and satisfying $\|u_n\|_{H}\le C$ in the Sobolev space $H$ defined in \eqref{eq:1.1a}, $u_n$ converges strongly in $H$ provided that $N>6\alpha$ and $\lambda>0$. An application of this compactness result is that problem \eqref{eq:0.1} possesses infinitely many solutions under the same assumptions. Comment: 34 pages
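    A minimal sketch of the variational formulation ordinarily associated with a problem of this type, assuming $H$ carries the norm $\|u\|_H^2=\int_{\mathbb{R}^N}|(-\Delta)^{\alpha/2}u|^2\,dx$ (the precise space defined in \eqref{eq:1.1a} is not reproduced in the abstract, so this identification is an assumption): \begin{equation*} I_\varepsilon(u) = \frac{1}{2}\|u\|_H^2 - \frac{1}{2^*_\alpha-\varepsilon}\int_\Omega |u|^{2^*_\alpha-\varepsilon}\,dx - \frac{\lambda}{2}\int_\Omega u^2\,dx, \end{equation*} whose critical points in $H$ are the weak solutions of \eqref{eq:0.1}.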

    Semantic Object Parsing with Local-Global Long Short-Term Memory

    Semantic object parsing is a fundamental task for understanding objects in detail in the computer vision community, where incorporating multi-level contextual information is critical for achieving fine-grained pixel-level recognition. Prior methods often leverage contextual information by post-processing predicted confidence maps. In this work, we propose a novel deep Local-Global Long Short-Term Memory (LG-LSTM) architecture to seamlessly incorporate short-distance and long-distance spatial dependencies into the feature learning over all pixel positions. In each LG-LSTM layer, local guidance from neighboring positions and global guidance from the whole image are imposed on each position to better exploit complex local and global contextual information. Individual LSTMs for distinct spatial dimensions are also utilized to intrinsically capture the various spatial layouts of semantic parts in the images, yielding distinct hidden and memory cells at each position for each dimension. In our parsing approach, several LG-LSTM layers are stacked and appended to the intermediate convolutional layers to directly enhance visual features, allowing network parameters to be learned in an end-to-end way. The long chains of sequential computation by stacked LG-LSTM layers also enable each pixel to sense a much larger region for inference, benefiting from the memorization of previous dependencies in all positions along all dimensions. Comprehensive evaluations on three public datasets clearly demonstrate the significant superiority of our LG-LSTM over other state-of-the-art methods. Comment: 10 pages
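    A minimal, illustrative sketch of the local-plus-global update the abstract describes, written as a plain NumPy recurrence rather than the authors' actual architecture; the function and parameter names (lg_lstm_step, W_local, W_global, W_input) are hypothetical placeholders.

        import numpy as np

        def lg_lstm_step(features, W_local, W_global, W_input):
            """Simplified LG-LSTM-style update over an H x W x C feature map.

            For each position, local guidance comes from the 8 neighboring
            states and global guidance from a whole-image (mean-pooled)
            summary. Illustrative sketch only, not the paper's exact model.
            """
            H, W, C = features.shape
            hidden = np.zeros_like(features)
            global_ctx = features.mean(axis=(0, 1))       # global guidance
            for i in range(H):
                for j in range(W):
                    # gather the 8 spatial neighbors (local guidance)
                    neigh = [features[x, y]
                             for x in range(max(0, i - 1), min(H, i + 2))
                             for y in range(max(0, j - 1), min(W, j + 2))
                             if (x, y) != (i, j)]
                    local_ctx = np.mean(neigh, axis=0)
                    pre = (features[i, j] @ W_input
                           + local_ctx @ W_local
                           + global_ctx @ W_global)
                    hidden[i, j] = np.tanh(pre)           # new hidden state
            return hidden

        # Toy usage: one LG-LSTM-style layer over a random 16x16 feature map.
        C = 8
        feats = np.random.randn(16, 16, C)
        Wl, Wg, Wi = (np.random.randn(C, C) * 0.1 for _ in range(3))
        out = lg_lstm_step(feats, Wl, Wg, Wi)

    Stacking several such layers would correspond to the paper's idea of letting each position gradually sense a larger region; the per-dimension LSTMs and gating of the real model are omitted here for brevity.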

    Interpretable Structure-Evolving LSTM

    This paper develops a general framework for learning interpretable data representations via Long Short-Term Memory (LSTM) recurrent neural networks over hierarchical graph structures. Instead of learning LSTM models over pre-fixed structures, we propose to further learn the intermediate interpretable multi-level graph structures in a progressive and stochastic way from data during the LSTM network optimization. We thus call this model the structure-evolving LSTM. In particular, starting with an initial element-level graph representation where each node is a small data element, the structure-evolving LSTM gradually evolves the multi-level graph representations by stochastically merging graph nodes with high compatibilities along the stacked LSTM layers. In each LSTM layer, we estimate the compatibility of two connected nodes from their corresponding LSTM gate outputs, which is used to generate a merging probability. Candidate graph structures are accordingly generated, in which the nodes are grouped into cliques according to their merging probabilities. We then produce the new graph structure with a Metropolis-Hastings algorithm, which alleviates the risk of getting stuck in local optima by stochastic sampling with an acceptance probability. Once a graph structure is accepted, a higher-level graph is constructed by taking the partitioned cliques as its nodes. During the evolving process, the representation becomes more abstract at higher levels, where redundant information is filtered out, allowing more efficient propagation of long-range data dependencies. We evaluate the effectiveness of the structure-evolving LSTM on the task of semantic object parsing and demonstrate its advantage over state-of-the-art LSTM models on standard benchmarks. Comment: To appear in CVPR 2017 as a spotlight paper
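    A rough Python sketch of the stochastic graph-coarsening step described above, assuming per-edge compatibilities are already available (in the paper they come from LSTM gate outputs); the acceptance rule below is a simplified Metropolis-Hastings-style illustration, and every name (evolve_graph, compatibility, temperature) is hypothetical.

        import math
        import random

        def evolve_graph(nodes, edges, compatibility, temperature=1.0):
            """One structure-evolving step: stochastically merge compatible nodes.

            nodes: list of node ids; edges: list of (u, v) pairs;
            compatibility: dict mapping (u, v) -> merging probability in [0, 1].
            Returns a mapping from each old node id to its merged clique id.
            Illustrative sketch only, not the paper's exact procedure.
            """
            parent = {n: n for n in nodes}

            def find(n):                          # union-find root lookup
                while parent[n] != n:
                    parent[n] = parent[parent[n]]
                    n = parent[n]
                return n

            for (u, v) in edges:
                p = compatibility[(u, v)]
                # propose merging u and v with probability given by compatibility
                if random.random() < p:
                    # accept the proposal with a probability that decays with an
                    # (assumed) energy gap, so low-compatibility merges are rare
                    accept = min(1.0, math.exp(-(1.0 - p) / temperature))
                    if random.random() < accept:
                        parent[find(u)] = find(v)     # merge the two cliques

            return {n: find(n) for n in nodes}

        # Toy usage: a 4-node chain graph with hand-set compatibilities.
        nodes = [0, 1, 2, 3]
        edges = [(0, 1), (1, 2), (2, 3)]
        compat = {(0, 1): 0.9, (1, 2): 0.2, (2, 3): 0.8}
        print(evolve_graph(nodes, edges, compat))

    Taking the resulting cliques as the nodes of the next layer's graph corresponds to building the higher-level, more abstract representation the abstract refers to.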

    Reversible Recursive Instance-level Object Segmentation

    In this work, we propose a novel Reversible Recursive Instance-level Object Segmentation (R2-IOS) framework to address the challenging instance-level object segmentation task. R2-IOS consists of a reversible proposal refinement sub-network that predicts bounding box offsets for refining object proposal locations, and an instance-level segmentation sub-network that generates the foreground mask of the dominant object instance in each proposal. By being recursive, R2-IOS iteratively optimizes the two sub-networks during joint training, in which the refined object proposals and improved segmentation predictions are alternately fed into each other to progressively increase the network capabilities. By being reversible, the proposal refinement sub-network adaptively determines the optimal number of refinement iterations required for each proposal during both training and testing. Furthermore, to handle multiple overlapping instances within a proposal, an instance-aware denoising autoencoder is introduced into the segmentation sub-network to distinguish the dominant object from other distracting instances. Extensive experiments on the challenging PASCAL VOC 2012 benchmark clearly demonstrate the superiority of R2-IOS over other state-of-the-art methods. In particular, the $\text{AP}^r$ over $20$ classes at $0.5$ IoU achieves $66.7\%$, which significantly outperforms the results of $58.7\%$ by PFN~\cite{PFN} and $46.3\%$ by~\cite{liu2015multi}. Comment: 9 pages
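    A schematic Python sketch of the recursive refinement loop described above, assuming two black-box sub-networks (refine_proposal and segment_instance) and a simple offset-magnitude test standing in for the reversible, adaptive stopping rule; all names and thresholds are hypothetical placeholders.

        def r2_ios_inference(image, proposals, refine_proposal, segment_instance,
                             max_iters=4, stop_threshold=1.0):
            """Illustrative recursion: alternately refine each proposal box and
            re-segment the dominant instance inside it, stopping per proposal
            once the predicted offsets become negligible (the adaptive number
            of iterations). Sketch only, not the paper's trained model.
            """
            results = []
            for box in proposals:
                mask = None
                for _ in range(max_iters):
                    # proposal refinement sub-network: predicts box offsets,
                    # here assumed to also see the current segmentation mask
                    dx, dy, dw, dh = refine_proposal(image, box, mask)
                    box = (box[0] + dx, box[1] + dy, box[2] + dw, box[3] + dh)
                    # instance-level segmentation sub-network: foreground mask
                    # of the dominant object inside the refined box
                    mask = segment_instance(image, box)
                    if abs(dx) + abs(dy) + abs(dw) + abs(dh) < stop_threshold:
                        break                      # adaptive early stopping
                results.append((box, mask))
            return results

        # Toy usage with stub sub-networks that do nothing useful.
        stub_refine = lambda img, box, mask: (0.0, 0.0, 0.0, 0.0)
        stub_segment = lambda img, box: [[0]]
        print(r2_ios_inference("img", [(0, 0, 10, 10)], stub_refine, stub_segment))

    During joint training the two sub-networks would be optimized together, with the refined proposals and improved masks feeding into one another as the abstract describes.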

    A Cultural Critique of Ecology and Livelihood in Contemporary Rural China: The Case of the Puhan Community (蒲韩社区) in Shanxi

    In the discourse of developmentalism, the countryside is often regarded as a declining and backward area, and rural development is mostly seen as an economic process of moving from poverty to wealth, relying in particular on the exploitation of various "ecological resources." In this process, people are shaped into subjects who turn "resources" into "capital," while people themselves are also treated as a kind of resource, and rural society appears to be swept along. The contemporary rural reconstruction movement, even when it proposes various innovative or sustainable development strategies, has not easily escaped this developmental logic and often finds itself in difficulty. Taking the "Puhan community" (蒲韩社区) in Shanxi as its main case, and drawing on Felix Guattari's perspective of the three ecologies, this study argues that thinking about the development of the Chinese countryside today requires stepping outside the usual discourses of environmental protection and economic development, as well as the human-centered, rationalist metaphysics of the subject. It calls for introducing the thought of ecosophy, studying the production of subjectivity within a broader view of human existence, paying particular attention to the unconscious, regaining hold of the various forces that constitute subjectivity, and seeking a different way out of the crisis through the self-organization and production of rural communities. The thesis explores how the "Puhan community" works to produce deterritorialized subjectivity, breaking away from homogenizing territories and the constraints of individualism, so that farmers can rebuild collective practices in every domain of daily life and respond to the ecological and livelihood crises through resingularization. Facing the current ecological problems and the crisis driven by economic development, farmers are neither entirely repressed and passive, nor do they enjoy full space for autonomous innovation; the rural community/collective, as an assemblage, is continuously generated through changing relations among its components. The larger climate imposes various limits and territories, while the self-organized collective holds a virtual space of counterbalance, and different subjectivities are continually produced in the struggle between repression and resistance. The thesis argues that the contemporary rural reconstruction movement should take the production of subjectivity as its point of breakthrough, overcome grand narratives, regain hold of the virtual relations and forces in the concrete domains of farmers' lives, and allow farmers to move toward a resingularized way of life by rebuilding collective practices.